
feat: chat UI #1112

Draft
leonardmq wants to merge 6 commits into leonard/kil-447-feat-stream-multiturn-ai-sdk-openai-protocols from leonard/kil-449-feat-svelte-ui-for-chatbot

Conversation

@leonardmq
Collaborator

What does this PR do?

Vercel AI SDK has libraries for Svelte, but we use Svelte 4, and those libraries require Svelte >= 5. An older version of the library exists for Svelte 4, but it is tied to an older version of the protocol, so using it would require our backend / Kiln SDK to also stay on that old protocol version.

Changes:

  • add Chat UI

Checklists

  • Tests have been run locally and passed
  • New tests have been added to any work in /lib

@leonardmq leonardmq marked this pull request as draft March 10, 2026 09:44
@coderabbitai
Contributor

coderabbitai bot commented Mar 10, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 5a93542e-3733-483a-a704-ae1a5f44ba18


@github-actions

github-actions bot commented Mar 10, 2026

📊 Coverage Report

Overall Coverage: 90%

Diff: origin/leonard/kil-447-feat-stream-multiturn-ai-sdk-openai-protocols...HEAD

  • app/desktop/desktop_server.py (100%)
  • app/desktop/studio_server/chat_api.py (80.7%): Missing lines 32-42,56-57,83-84,124-125,137-139,143
  • libs/core/kiln_ai/adapters/chat/chat_formatter.py (87.5%): Missing lines 278
  • libs/core/kiln_ai/adapters/model_adapters/base_adapter.py (65.2%): Missing lines 696,736-738,746-749
  • libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py (66.7%): Missing lines 702-703
  • libs/core/kiln_ai/adapters/model_adapters/stream_events.py (100%)
  • libs/core/kiln_ai/datamodel/tool_id.py (92.3%): Missing lines 89
  • libs/core/kiln_ai/tools/client_tool.py (96.9%): Missing lines 57
  • libs/core/kiln_ai/tools/tool_registry.py (100%)

Summary

  • Total: 197 lines
  • Missing: 34 lines
  • Coverage: 82%

Line-by-line

View line-by-line diff coverage

app/desktop/studio_server/chat_api.py

Lines 28-46

  28 
  29 
  30 def _find_task_run_by_id(task_run_id: str) -> TaskRun | None:
  31     """Search all projects and tasks for a task run with the given ID."""
! 32     project_paths = Config.shared().projects or []
! 33     for project_path in project_paths:
! 34         try:
! 35             project = Project.load_from_file(project_path)
! 36         except Exception:
! 37             continue
! 38         for task in project.tasks():
! 39             run = TaskRun.from_id_and_parent_path(task_run_id, task.path)
! 40             if run is not None:
! 41                 return run
! 42     return None
  43 
  44 
  45 def _execute_client_tool(tool_name: str, arguments: dict[str, Any]) -> str:
  46     """Execute a client-side tool and return the result as a string."""

Lines 52-61

  52             run = _find_task_run_by_id(task_run_id)
  53             if run is None:
  54                 return json.dumps({"error": f"Task run not found: {task_run_id}"})
  55             return run.model_dump_json(indent=2)
! 56         except Exception as e:
! 57             return json.dumps({"error": f"Failed to read task run: {e}"})
  58     return json.dumps({"error": f"Unknown client tool: {tool_name}"})
  59 
  60 
  61 def _parse_sse_events(

Lines 79-88

  79                         and event.get("type") == "client-tool-call"
  80                     ):
  81                         client_tool_event = event
  82                         continue
! 83                 except (json.JSONDecodeError, TypeError):
! 84                     pass
  85         lines_to_forward.append(line)
  86 
  87     return lines_to_forward, client_tool_event

Lines 120-129

  120                                     detail = (
  121                                         json.loads(error_body).get("message", detail)
  122                                         or detail
  123                                     )
! 124                                 except json.JSONDecodeError:
! 125                                     pass
  126                             yield f"data: {json.dumps({'type': 'error', 'message': detail})}\n\n".encode()
  127                             return
  128 
  129                         try:

Lines 133-147

  133                                     client_tool_event = tool_event
  134                                 forward_bytes = b"\n".join(lines)
  135                                 if forward_bytes.strip():
  136                                     yield forward_bytes + b"\n"
! 137                         except httpx.RemoteProtocolError:
! 138                             if client_tool_event is not None:
! 139                                 logger.debug(
  140                                     "Connection closed after client tool call event (expected)"
  141                                 )
  142                             else:
! 143                                 raise
  144 
  145                 if client_tool_event is None:
  146                     return

libs/core/kiln_ai/adapters/chat/chat_formatter.py

Lines 274-282

  274         self._prior_trace = prior_trace
  275 
  276     def _is_tool_continuation(self) -> bool:
  277         if not self._prior_trace:
! 278             return False
  279         last = self._prior_trace[-1]
  280         return isinstance(last, dict) and last.get("role") == "tool"
  281 
  282     def initial_messages(self) -> list[ChatCompletionMessageIncludingLiteLLM]:

libs/core/kiln_ai/adapters/model_adapters/base_adapter.py

Lines 692-700

  692     @property
  693     def task_run(self) -> TaskRun:
  694         if self._task_run is None:
  695             if self.client_tool_pending:
! 696                 raise RuntimeError(
  697                     "No task_run available: stream ended with a client tool call. "
  698                     "Check .client_tool_pending before accessing .task_run"
  699                 )
  700             raise RuntimeError(

Lines 732-742

  732                     elif isinstance(event, ToolCallEvent):
  733                         last_event_was_tool_call = True
  734                         for ai_event in converter.convert_tool_event(event):
  735                             yield ai_event
! 736             except ClientToolCallRequired as e:
! 737                 self.client_tool_pending = True
! 738                 yield AiSdkStreamEvent(
  739                     AiSdkEventType.CLIENT_TOOL_CALL,
  740                     {
  741                         "toolCallId": e.tool_call_id,
  742                         "toolName": e.tool_name,

Lines 742-753

  742                         "toolName": e.tool_name,
  743                         "input": e.arguments,
  744                     },
  745                 )
! 746                 for ai_event in converter.finalize():
! 747                     yield ai_event
! 748                 yield AiSdkStreamEvent(AiSdkEventType.FINISH_STEP)
! 749                 return
  750 
  751             for ai_event in converter.finalize():
  752                 yield ai_event

libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py

Lines 698-707

  698 
  699                 try:
  700                     result = await t.run(c, **args)
  701                 except ClientToolCallRequired as e:
! 702                     e.tool_call_id = tc_id
! 703                     raise
  704                 return ChatCompletionToolMessageParamWrapper(
  705                     role="tool",
  706                     tool_call_id=tc_id,
  707                     content=result.output,

libs/core/kiln_ai/datamodel/tool_id.py

Lines 85-93

  85     # Client tools: client_tool::<tool_name>
  86     if id.startswith(CLIENT_TOOL_ID_PREFIX):
  87         tool_name = client_tool_name_from_id(id)
  88         if not tool_name:
! 89             raise ValueError(
  90                 f"Invalid client tool ID: {id}. Expected format: 'client_tool::<tool_name>'."
  91             )
  92         return id

libs/core/kiln_ai/tools/client_tool.py

Lines 53-61

  53     async def name(self) -> str:
  54         return self._name
  55 
  56     async def description(self) -> str:
! 57         return self._description
  58 
  59     async def toolcall_definition(self) -> ToolCallDefinition:
  60         return {
  61             "type": "function",


@leonardmq
Collaborator Author

Bug:

  • after the conversation grows past the chat's max height, the message list becomes a scroll view; clicking to expand a past tool-call / reasoning block higher up in that scroll view snaps the view back down to the bottom
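One common way to approach this class of bug (a sketch only, not the fix used in this PR; the function name and threshold are assumptions) is to auto-scroll to the bottom only when the user was already pinned there, so expanding an older block does not yank the view back down:

```typescript
// Sketch: decide whether the chat view should auto-scroll on content changes.
// Only auto-scroll when the user was already at (or near) the bottom, so
// expanding an older tool-call/reasoning block leaves the scroll position alone.
// The 24px slack threshold is an arbitrary assumption for illustration.
function isPinnedToBottom(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold = 24,
): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= threshold
}
```

The caller would capture `isPinnedToBottom(...)` before mutating the DOM (e.g. before toggling a collapsed block) and scroll to the bottom afterwards only if it was true.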

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new, fully functional chat interface within the application. The core motivation for this feature was to provide an interactive AI chat experience, which necessitated a custom streaming implementation due to existing framework version constraints. The new UI integrates robust markdown rendering with security features, ensuring a rich and safe display of AI-generated content.

Highlights

  • New Chat UI Implementation: A new chat user interface has been added, providing a dedicated page for interactive conversations. This includes message display, input handling, and real-time streaming of assistant responses.
  • Custom Streaming Chat Logic: A custom Server-Sent Events (SSE) streaming chat mechanism was developed to handle AI SDK protocol JSON events. This was necessary due to incompatibility between the Vercel AI SDK libraries (requiring Svelte 5) and the project's current Svelte 4 version.
  • Markdown Rendering and Sanitization: A new Svelte component (ChatMarkdown.svelte) was introduced to render markdown content, incorporating marked for parsing, DOMPurify for HTML sanitization, and highlight.js for syntax highlighting of code blocks.
  • Dependency Updates: New dependencies dompurify, marked, and @types/dompurify were added to support markdown rendering and HTML sanitization. Existing dompurify and marked packages were also updated in package-lock.json.
  • Navigation and Styling Enhancements: A 'Chat' link was added to the main application navigation, and new CSS animations (thinking-dot) were introduced to enhance the chat UI's visual feedback during streaming.


Changelog
  • app/web_ui/package-lock.json
    • Added dompurify and marked dependencies.
    • Updated dompurify from 3.3.0 to 3.3.2.
    • Updated marked from 4.3.0 to 17.0.4.
    • Added @types/dompurify dependency.
  • app/web_ui/package.json
    • Added dompurify as a dependency.
    • Added marked as a dependency.
    • Added @types/dompurify as a dev dependency.
  • app/web_ui/src/app.css
    • Added @keyframes thinking-dot animation for visual feedback.
    • Added .thinking-dot class to apply the animation.
  • app/web_ui/src/lib/chat/ChatMarkdown.svelte
    • Added a new Svelte component to render markdown content.
    • Implemented markdown parsing using marked.
    • Integrated DOMPurify for HTML sanitization.
    • Configured highlight.js for syntax highlighting of code blocks.
  • app/web_ui/src/lib/chat/streaming_chat.ts
    • Added a new TypeScript file for custom SSE streaming chat logic.
    • Defined interfaces for ChatMessagePart, ChatMessage, BackendChatRequest, and StreamEvent.
    • Implemented streamChat function to handle SSE parsing and message updates.
  • app/web_ui/src/lib/chat_api_url.ts
    • Added a new TypeScript file to define the chat API endpoint URL, configurable via environment variables.
  • app/web_ui/src/lib/ui/icons/arrow_up_icon.svelte
    • Added a new Svelte component for an SVG arrow-up icon.
  • app/web_ui/src/lib/ui/icons/stop_icon.svelte
    • Added a new Svelte component for an SVG stop icon.
  • app/web_ui/src/routes/(app)/+layout.svelte
    • Updated the Section enum to include a Chat option.
    • Added logic to determine the active navigation section for /chat routes.
    • Added a new navigation link for 'Chat' in the sidebar menu.
  • app/web_ui/src/routes/(app)/chat/+page.svelte
    • Added the main Svelte page for the chat user interface.
    • Implemented message display, user input handling, and streaming integration.
    • Included logic for collapsing/expanding reasoning and tool call details.
    • Integrated ChatMarkdown component for rendering message content.
    • Added stop and send buttons with loading states and input validation.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new chat UI, including dependencies, a markdown rendering component, client-side logic for streaming chat, and the chat page itself. A critical security concern has been identified: the DOMPurify configuration is overly permissive, allowing attributes like class and target which can be exploited for UI redressing and phishing attacks. Additionally, raw error messages from the backend API are exposed, potentially leading to information leakage. Beyond security, there are also opportunities to improve code maintainability by reducing redundancy.

"span",
"div",
]
const ALLOWED_ATTR = ["href", "target", "rel", "class"]

security-high

The DOMPurify configuration allows the class attribute on all allowed tags. When using a utility-first CSS framework like Tailwind CSS, this allows an attacker to inject arbitrary utility classes into the rendered markdown. An attacker can use this to perform UI redressing, such as overlaying fake buttons, hiding parts of the UI, or mimicking system notifications, which can lead to phishing or other social engineering attacks.
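One way to address this, sketched under the assumption that the component controls all styling itself, is to drop `class` from the allow-list and have the component attach any classes it needs after sanitization rather than trusting classes embedded in the markdown:

```typescript
// Sketch: a tighter DOMPurify attribute allow-list. Omitting "class" means
// untrusted markdown cannot inject arbitrary Tailwind utility classes; the
// component can still apply its own classes to the sanitized output.
const ALLOWED_ATTR = ["href", "target", "rel"]
```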

"span",
"div",
]
const ALLOWED_ATTR = ["href", "target", "rel", "class"]

security-medium

The DOMPurify configuration allows the target attribute on <a> tags. An attacker can use this to set target="_self" or target="_top", allowing them to navigate the current tab or the entire window to a malicious site when the user clicks a link in the chat. This can be used for phishing attacks where the attacker mimics the application's UI on their own site.
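A common hardening pattern (a sketch, not necessarily what this PR adopts) is to stop trusting `target`/`rel` from the input entirely and instead force safe values on every sanitized link via a DOMPurify hook:

```typescript
import DOMPurify from "dompurify"

// Sketch: after sanitization, overwrite target/rel on every anchor so links
// from untrusted markdown always open in a new tab without an opener handle,
// regardless of what the input specified.
DOMPurify.addHook("afterSanitizeAttributes", (node) => {
  if (node.tagName === "A") {
    node.setAttribute("target", "_blank")
    node.setAttribute("rel", "noopener noreferrer")
  }
})
```

With this hook in place, `target` and `rel` can also be dropped from `ALLOWED_ATTR`, since the hook sets them unconditionally.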

const text = await response.text()
onError(
new Error(
`Chat API error ${response.status}: ${text || response.statusText}`,

security-medium

The streamChat function captures the raw response body from the API when a request fails and includes it in the Error object passed to the onError callback. This error message is then displayed directly in the UI in +page.svelte. If the backend API returns sensitive information (e.g., system paths, stack traces, or internal configuration) in the error response, it will be exposed to the user.
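A typical mitigation, sketched here with a hypothetical helper name, is to log the raw body for debugging but surface only a generic, status-based message to the user:

```typescript
// Sketch: map an HTTP failure to a user-safe message instead of echoing the
// raw response body (which may contain stack traces or internal paths) to the
// UI. The raw body would be logged server-side or to the console instead.
function safeChatErrorMessage(status: number): string {
  if (status === 429) return "The chat service is busy. Please try again shortly."
  if (status >= 500) return "The chat service hit an internal error. Please retry."
  return `Chat request failed (HTTP ${status}).`
}
```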

Comment on lines +11 to +20
hljs.registerLanguage("json", json)
hljs.registerLanguage("javascript", javascript)
hljs.registerLanguage("js", javascript)
hljs.registerLanguage("typescript", typescript)
hljs.registerLanguage("ts", typescript)
hljs.registerLanguage("python", python)
hljs.registerLanguage("py", python)
hljs.registerLanguage("bash", bash)
hljs.registerLanguage("shell", bash)
hljs.registerLanguage("sh", bash)

medium

The language packs for highlight.js often include common aliases. For example, the javascript language pack includes the js alias, typescript includes ts, python includes py, and bash includes sh. Registering these aliases explicitly is redundant. You can simplify this section by removing the redundant registerLanguage calls.

  hljs.registerLanguage("json", json)
  hljs.registerLanguage("javascript", javascript)
  hljs.registerLanguage("typescript", typescript)
  hljs.registerLanguage("python", python)
  hljs.registerLanguage("bash", bash)
  hljs.registerLanguage("shell", bash)

Comment on lines +207 to +212
if (currentTextId === null) {
const id = nextSlotId()
partOrder.push({ kind: "text", id })
currentTextId = id
textBlocks.set(id, "")
}

medium

The logic to create a new text block when currentTextId is null is duplicated in multiple places (here, and in the text-delta handler). A similar duplication exists for reasoning blocks. This makes the code harder to maintain.

Consider refactoring this logic into a helper function. For example, a function like ensureCurrentTextBlock() could encapsulate this block creation logic and be called wherever needed. This would reduce code duplication and make the stream processing logic easier to follow.
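The suggested refactor could look roughly like this; `nextSlotId`, `partOrder`, `textBlocks`, and `currentTextId` are taken from the quoted snippet, while the surrounding declarations are assumptions made to keep the sketch self-contained:

```typescript
// Sketch of the suggested ensureCurrentTextBlock() helper: lazily create the
// current text block so every stream-event handler shares one code path
// instead of duplicating the creation logic.
let slotCounter = 0
const nextSlotId = () => `slot-${slotCounter++}`

const partOrder: { kind: "text" | "reasoning"; id: string }[] = []
const textBlocks = new Map<string, string>()
let currentTextId: string | null = null

function ensureCurrentTextBlock(): string {
  if (currentTextId === null) {
    const id = nextSlotId()
    partOrder.push({ kind: "text", id })
    textBlocks.set(id, "")
    currentTextId = id
  }
  return currentTextId
}
```

Each handler that appends text would then call `ensureCurrentTextBlock()` and write into `textBlocks` by the returned id; an analogous helper would cover reasoning blocks.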

@leonardmq leonardmq force-pushed the leonard/kil-449-feat-svelte-ui-for-chatbot branch from 2bfe312 to cdd99bd Compare March 11, 2026 10:24
@leonardmq leonardmq force-pushed the leonard/kil-449-feat-svelte-ui-for-chatbot branch from d411063 to 0bbab04 Compare March 12, 2026 10:30